
Weighted Model Counting in FO2 with Cardinality Constraints and Counting Quantifiers: A Closed Form Formula

Malhotra, Sagar, Serafini, Luciano

arXiv.org Artificial Intelligence

Weighted First-Order Model Counting (WFOMC) computes the weighted sum of the models of a first-order logic theory on a given finite domain. First-order logic theories that admit polynomial-time WFOMC w.r.t. domain cardinality are called domain-liftable. We introduce the concept of lifted interpretations as a tool for formulating closed forms for WFOMC. Using lifted interpretations, we reconstruct the closed-form formula for polynomial-time FOMC in the universally quantified fragment of FO2, earlier proposed by Beame et al. We then expand this closed form to incorporate cardinality constraints, existential quantifiers, and counting quantifiers (a.k.a. C2) without losing domain-liftability. Finally, we show that the obtained closed form motivates a natural definition of a family of weight functions strictly larger than symmetric weight functions.
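To make the WFOMC definition concrete, here is a brute-force sketch (not from the paper, which gives a polynomial-time closed form): for the universally quantified FO2 sentence ∀x∀y. F(x,y) → F(y,x) over a domain of size n, with a symmetric weight function assigning w_pos to each true ground atom and w_neg to each false one, we enumerate all interpretations of the binary predicate F and sum the weights of the models. The function name and weight values are illustrative assumptions.

```python
from itertools import product

def wfomc_symmetric_friends(n, w_pos=2.0, w_neg=1.0):
    # Brute-force WFOMC for the FO2 sentence
    #     forall x forall y: F(x,y) -> F(y,x)
    # over a domain {0, ..., n-1}, with a symmetric weight function:
    # each true ground atom F(a, b) contributes w_pos, each false one w_neg.
    # Illustrative only: this runs in time exponential in n^2, whereas
    # domain-liftable closed forms are polynomial in n.
    pairs = [(a, b) for a in range(n) for b in range(n)]
    total = 0.0
    for bits in product([False, True], repeat=len(pairs)):
        interp = dict(zip(pairs, bits))
        # Keep only interpretations that are models of the sentence.
        if all((not interp[(a, b)]) or interp[(b, a)] for a, b in pairs):
            weight = 1.0
            for v in bits:
                weight *= w_pos if v else w_neg
            total += weight
    return total
```

For this particular sentence the brute-force sum agrees with the simple closed form (w_pos + w_neg)^n * (w_pos^2 + w_neg^2)^(n(n-1)/2), since the n diagonal atoms are unconstrained and each off-diagonal pair must be set jointly; e.g. for n = 2 and the default weights, both give 3^2 * 5 = 45.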


Lifted Inference in 2-Variable Markov Logic Networks with Function and Cardinality Constraints Using Discrete Fourier Transform

Kuzelka, Ondrej

arXiv.org Artificial Intelligence

Markov logic networks (MLNs, Richardson and Domingos, 2006) are a statistical relational learning [Getoor and Taskar, 2007] framework for probabilistic modelling of complex relational structures such as social and biological networks, molecules, etc. In general, inference in MLNs is intractable. Lifted inference refers to a set of methods developed in the literature which exploit symmetries to make probabilistic inference more tractable, e.g.


Markov Logic Networks with Complex Weights: Expressivity, Liftability and Fourier Transforms

Kuzelka, Ondrej

arXiv.org Artificial Intelligence

Statistical Relational Learning [Getoor and Taskar, 2007] (SRL) is concerned with learning probabilistic models from relational data such as, for instance, knowledge graphs, biological or social networks, structures of molecules, etc. Markov Logic Networks [Richardson and Domingos, 2006] (MLNs) are among the most prominent SRL systems, and in this paper we are interested in their expressivity. Informally, expressivity measures the "amount" of distributions that can be modelled by a given class of probabilistic models. An MLN is given by a set of weighted first-order logic formulas and it defines a distribution on possible worlds over a given domain. Here we study expressivity of MLNs in a setting where we first fix the first-order logic formulas defining the MLN and then vary their weights. Since it is not even clear what expressivity should mean in this context, our first contribution in this paper is a formal framework for studying expressivity of MLNs. The main reason for studying expressivity of MLNs in the setting where one first fixes the formulas is the computational complexity of inference, because that complexity usually depends mostly on the formulas and not so much on their weights.
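The MLN semantics described above can be sketched by brute force (an illustration, not the papers' lifted methods): a single weighted formula w : F(x,y) → F(y,x) defines, over a domain of size n, an unnormalised weight exp(w * n_true(ω)) for each possible world ω, where n_true counts the satisfied ground instances; normalising by the partition function Z gives the distribution. The formula choice and function name are illustrative assumptions.

```python
from itertools import product
from math import exp

def mln_distribution(n, w):
    # Brute-force distribution of an MLN with one weighted formula
    #     w : F(x,y) -> F(y,x)
    # over a domain {0, ..., n-1}.  Each possible world omega (a truth
    # assignment to all ground atoms F(a, b)) gets unnormalised weight
    # exp(w * n_true(omega)), where n_true counts satisfied groundings.
    pairs = [(a, b) for a in range(n) for b in range(n)]
    weights = {}
    for bits in product([False, True], repeat=len(pairs)):
        interp = dict(zip(pairs, bits))
        n_true = sum(1 for a, b in pairs
                     if (not interp[(a, b)]) or interp[(b, a)])
        weights[bits] = exp(w * n_true)
    z = sum(weights.values())  # partition function
    return {world: wt / z for world, wt in weights.items()}
```

This also makes the fixed-formulas setting tangible: varying w reweights the same set of possible worlds (w = 0 gives the uniform distribution), while the cost of computing Z is governed by the formula's groundings, not by the value of w.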